Bioz product search results (2026-03). Scores are Bioz Stars ratings based on article reviews, protocol conditions, and PubMed citations (ZERO BIAS); each listing below is based on 1 PubMed citation and 1 article review.

- Reinforcement Learning Models, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 94/100. https://www.bioz.com/result/reinforcement learning models/product/MathWorks Inc
- Reinforcement Learning Model, supplied by SoftMax Inc, used in various techniques. Bioz Stars score: 90/100. https://www.bioz.com/result/reinforcement learning model/product/SoftMax Inc
- Q-Learning Model, supplied by Abbott Laboratories, used in various techniques. Bioz Stars score: 90/100. https://www.bioz.com/result/q-learning model/product/Abbott Laboratories
- Deep Reinforcement Learning Model, supplied by IEEE Access, used in various techniques. Bioz Stars score: 90/100. https://www.bioz.com/result/deep reinforcement learning model/product/IEEE Access
- GAI Model GENTRL, supplied by Insilico Medicine, used in various techniques. Bioz Stars score: 90/100. https://www.bioz.com/result/gai model gentrl/product/Insilico Medicine
- Go-Explore Variants, supplied by Uber Technologies Inc, used in various techniques. Bioz Stars score: 90/100. https://www.bioz.com/result/go-explore variants/product/Uber Technologies Inc
- Neural Network Toolbox, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 96/100. https://www.bioz.com/result/neural network toolbox/product/MathWorks Inc
- Reinforcement Learning Models, supplied by Siemens AG, used in various techniques. Bioz Stars score: 90/100. https://www.bioz.com/result/reinforcement learning models/product/Siemens AG
- Recurrent Network With Model-Free Reinforcement Learning, supplied by Deepmind Technologies Ltd, used in various techniques. Bioz Stars score: 90/100. https://www.bioz.com/result/recurrent network with model-free reinforcement learning/product/Deepmind Technologies Ltd
- Neural Network Toolbox TM, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 90/100. https://www.bioz.com/result/neural network toolbox tm/product/MathWorks Inc
- Matlab R2016a, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 90/100. https://www.bioz.com/result/matlab r2016a/product/MathWorks Inc
- Reinforcement Learning Models Of Cognition, supplied by Janssen, used in various techniques. Bioz Stars score: 90/100. https://www.bioz.com/result/reinforcement learning models of cognition/product/Janssen
Image Search Results
Journal: Proceedings of the National Academy of Sciences of the United States of America
Article Title: Callousness, exploitativeness, and tracking of cooperation incentives in the human default network.
doi: 10.1073/pnas.2307221121
Figure Legend Snippet: Fig. 1. Rationale, design, and analytic approach. Individuals learn from experience by selecting an action, observing its outcome, and updating the expected reward value of future actions. Value updates are made using PEs, which reflect the discrepancy between expected and obtained outcomes such that better-than-expected outcomes lead to positive PEs and worse-than-expected outcomes lead to negative PEs. Other salient social information can also be integrated with experience to influence social decision-making, as when the reputation of one's partner predicts decisions to trust them even when reputation is unrelated to the partner's actual behavior (27, 41). (A) Consider the decision about whether to buy a friend a holiday gift. Reputational information (e.g., news of a friend's immoral behavior; blue circle) can be integrated with reinforcement history (e.g., did the friend buy you a gift last year? green circle) to affect one's policy toward their social counterpart. Critically, decisions that correctly anticipate the behavior of one's social partner (correctly predicting they bought you a gift, or correctly predicting they did not buy you a gift) yield positive PEs according to our policy model, leading to a cycle of reciprocity even when no actual reward is received. (B) Participants played a modified iterative social trust game with three fictional trustees, in which they had the option to keep an initial endowment or invest it in the hopes of increasing their profit if the trustee also invested. Pretask vignettes were used to manipulate trustees' reputations (blue box). To manipulate reinforcement history, trustees returned at varying rates across rich, poor, and neutral blocks (green box). On trials where participants kept, counterfactual feedback about what the trustee would have chosen was provided, even though it did not affect the trial payout.
(C) The policy model posits that individuals learn from social feedback to optimize their approach, or policy, toward their social counterpart, leading to better anticipation of the counterpart’s behavior. One implication of this is that correct predictions of the trustee’s behavior will lead to positive reward PEs, even if no actual reward is provided. This can be seen by comparing the expected direction of PEs in a model in which participants track actual rewards (column 3) vs. a model in which they track the success of their policy toward the counterpart (column 4). We propose that policy PEs are primarily encoded within the brain’s default network. Image credit: Default network figure adapted from ref. 35. (D) MSEM was used to evaluate whether between-person variables (e.g., policy PEs encoded in the default network) moderate the effect of design variables on trial-level decision-making. Personality traits were introduced as between-person predictors of policy PEs in the default network. Formal tests of mediation were then used to examine the indirect effect of traits on behavior via learning signals (highlighted lines).
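The value-update mechanism described in the legend is the standard delta rule: the expected value is nudged toward the observed outcome by a learning-rate-weighted prediction error. A minimal sketch of the reward-PE vs policy-PE contrast from panel (C), where function names, the learning rate, and the binary outcome coding are illustrative assumptions rather than the article's actual model specification:

```python
def delta_update(value, outcome, alpha=0.3):
    """Delta-rule (Rescorla-Wagner-style) update: move value toward
    outcome by a fraction alpha of the prediction error (PE).
    alpha is an illustrative learning rate, not a fitted parameter."""
    pe = outcome - value
    return value + alpha * pe, pe

def reward_pe(invested, payout, expected_value):
    """Reward-tracking model: the outcome is the actual trial payout
    (zero on 'keep' trials, regardless of the trustee's choice)."""
    outcome = payout if invested else 0.0
    return delta_update(expected_value, outcome)

def policy_pe(predicted_return, trustee_returned, expected_value):
    """Policy model: the outcome is 1 when the participant correctly
    anticipated the trustee's behavior, 0 otherwise, even when no
    actual reward is received."""
    outcome = 1.0 if predicted_return == trustee_returned else 0.0
    return delta_update(expected_value, outcome)

# Example: the participant keeps (no payout) but correctly predicts the
# trustee would not have returned. The reward-PE model produces a
# negative PE; the policy-PE model produces a positive PE.
v_r, pe_r = reward_pe(invested=False, payout=0.0, expected_value=0.5)
v_p, pe_p = policy_pe(predicted_return=False, trustee_returned=False,
                      expected_value=0.5)
print(pe_r, pe_p)  # -0.5 0.5
```

This reproduces the sign divergence the legend highlights between column 3 (actual rewards) and column 4 (policy success): the same trial yields opposite-signed PEs under the two models.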
Article Snippet:
Techniques: Modification